Adaptive gradient descent without descent
Yura Malitsky (Linköping University)
19-May-2021, 07:00-08:00
Abstract: In this talk I will present some recent results for the most classical optimization method: gradient descent. We will show that a simple zero-cost rule is sufficient to completely automate gradient descent. The method adapts to the local geometry, with convergence guarantees depending only on the smoothness in a neighborhood of a solution. The presentation is based on joint work with K. Mishchenko; see arxiv.org/abs/1910.09529.
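The zero-cost rule mentioned in the abstract can be sketched as follows. This is an illustrative implementation of the adaptive step-size rule described in the cited paper (arXiv:1910.09529) as I understand it: each step size is computed only from already-available iterates and gradients, estimating local smoothness on the fly. The function names, initial step size, and iteration count are assumptions for the sketch, not part of the talk listing.

```python
import numpy as np

def adaptive_gd(grad, x0, n_iters=1000):
    """Sketch of adaptive gradient descent with a zero-cost step-size rule.

    The step size lam_k is the minimum of a controlled growth term,
    sqrt(1 + theta_{k-1}) * lam_{k-1}, and a local inverse-smoothness
    estimate, ||x_k - x_{k-1}|| / (2 ||grad_k - grad_{k-1}||).
    No function evaluations or line search are needed.
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    lam = 1e-10                     # tiny initial step size (assumption)
    x = x_prev - lam * g_prev       # one plain gradient step to start
    theta = np.inf                  # ratio lam_k / lam_{k-1}; inf disables
                                    # the growth cap on the first iteration
    for _ in range(n_iters):
        g = grad(x)
        grad_diff = np.linalg.norm(g - g_prev)
        # Local estimate of 1 / (2L) from the two most recent points.
        local = (np.linalg.norm(x - x_prev) / (2.0 * grad_diff)
                 if grad_diff > 0 else np.inf)
        lam_new = min(np.sqrt(1.0 + theta) * lam, local)
        theta = lam_new / lam
        lam = lam_new
        x_prev, g_prev = x, g
        x = x - lam * g
    return x
```

For example, on the quadratic f(x) = ||x||^2 / 2 (so grad(x) = x), the rule quickly settles on a step size near 1/(2L) = 0.5 and the iterates converge to the minimizer at the origin without any tuning.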
optimization and control
Audience: researchers in the topic
Variational Analysis and Optimisation Webinar
Series comments: Register on www.mocao.org/va-webinar/ to receive information about the zoom connection.
Organizers: Hoa Bui*, Matthew Tam*, Minh Dao, Alex Kruger, Vera Roshchina*, Guoyin Li
*contact for this listing